A recent study identified a phenomenon called neural collapse, whereby the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to this appealing structure under imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
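The simplex equiangular tight frame (ETF) structure referenced above can be written down concretely. Below is a minimal NumPy sketch (not the authors' code; the regularizer form is a plausible simplification of the idea) that constructs a K-class simplex ETF and a center regularizer pulling normalized class feature centers toward the ETF geometry.

```python
import numpy as np

def simplex_etf(num_classes: int) -> np.ndarray:
    """Return a (K, K) matrix whose rows are vertices of a K-class
    simplex equiangular tight frame (ETF)."""
    K = num_classes
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    return M  # rows have unit norm; pairwise cosine = -1/(K-1)

def center_regularizer(centers: np.ndarray) -> float:
    """Penalize deviation of the class-center Gram matrix from the
    ideal ETF Gram matrix (a simplified stand-in, not the paper's
    exact loss)."""
    K = centers.shape[0]
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    gram = c @ c.T
    target = (K / (K - 1)) * np.eye(K) - 1.0 / (K - 1)
    return float(np.mean((gram - target) ** 2))

M = simplex_etf(5)
print(np.round(M @ M.T, 3))   # diagonal 1, off-diagonal -0.25
print(center_regularizer(M))  # ~0 for a perfect ETF
```

The maximal-separation property is visible in the Gram matrix: all pairwise cosines equal -1/(K-1), the most negative value achievable by K unit vectors with a common pairwise angle.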
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples part features from thing/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross-attention scheme to further boost part segmentation quality, using a part-whole interaction method based on masked cross attention. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFLOPs and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
Variance reduction techniques such as SPIDER/SARAH/STORM have been extensively studied to improve the convergence rates of stochastic non-convex optimization, and they usually maintain and update a sequence of estimators for a single function across iterations. What if we need to track multiple functional mappings across iterations but can access stochastic samples of only $\mathcal{O}(1)$ functional mappings at each iteration? There is an important application in solving an emerging family of coupled compositional optimization problems of the form $\sum_{i=1}^m f_i(g_i(\mathbf{w}))$, where $g_i$ is accessible via a stochastic oracle. The key issue is to track and estimate the sequence $\mathbf{g}(\mathbf{w}) = (g_1(\mathbf{w}), \ldots, g_m(\mathbf{w}))$ across iterations, where $\mathbf{g}(\mathbf{w})$ has $m$ blocks and only $\mathcal{O}(1)$ blocks may be probed to obtain their stochastic values and Jacobians. To improve the complexity of solving these problems, we propose a novel stochastic method named the Multi-block-Single-probe Variance Reduced (MSVR) estimator to track the sequence of $\mathbf{g}(\mathbf{w})$. It is inspired by STORM but introduces a customized error-correction term that mitigates the noise not only in the stochastic samples of the selected blocks but also in the blocks that are not sampled. With the help of the MSVR estimator, we develop several algorithms for solving the aforementioned compositional problems, with improved complexities across a spectrum of settings with non-convex/convex/strongly convex objectives. Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter. Empirical studies on multi-task deep AUC maximization demonstrate the better performance of using the new estimator.
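The multi-block tracking problem above can be made concrete with a toy experiment. The sketch below tracks an $m$-block mapping while probing only $B \ll m$ blocks per step; the STORM-style update with an amplified $(m/B - 1)$ correction is a plausible simplification of the MSVR idea, not the paper's exact coefficients.

```python
import numpy as np

# Toy illustration: track g(w) = (g_1(w), ..., g_m(w)) while probing
# only B << m blocks per iteration. The correction coefficients below
# are an assumed STORM-style simplification, not the exact constants
# from the paper.
rng = np.random.default_rng(0)
m, B, beta = 8, 2, 0.5
coeffs = rng.uniform(0.5, 1.5, size=m)   # toy mappings g_i(w) = a_i * w

def g_true(w):
    return coeffs * w

def g_noisy(w, idx):                     # stochastic oracle for probed blocks
    return g_true(w)[idx] + 0.01 * rng.standard_normal(len(idx))

w, u = 1.0, np.zeros(m)                  # u is the running estimate of g(w)
for t in range(500):
    w_new = w + 0.01                     # stand-in for an optimizer step on w
    idx = rng.choice(m, size=B, replace=False)
    gi_new, gi_old = g_noisy(w_new, idx), g_noisy(w, idx)
    # Probed blocks get a STORM-like update plus an amplified (m/B - 1)
    # correction so the drift of the m - B unprobed, stale blocks is
    # also compensated on average.
    u[idx] = (gi_new + (1.0 - beta) * (u[idx] - gi_old)
              + (m / B - 1.0) * (1.0 - beta) * (gi_new - gi_old))
    w = w_new

print(np.max(np.abs(u - g_true(w))))     # small tracking error
```

Without the amplified correction, blocks that go unprobed for several iterations accumulate drift as $w$ moves; the extra term compensates that staleness in expectation, which is the mechanism the abstract credits for the improved complexities.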
Reference-based line-art colorization is a challenging task in computer vision. Color, texture, and shading are rendered from an abstract sketch, which relies heavily on precise modeling of the long-range dependencies between the sketch and the reference. Popular techniques for bridging cross-modal information and modeling long-range dependencies employ the attention mechanism. However, in the context of reference-based line-art colorization, several techniques exacerbate the existing training difficulties of attention, e.g., the self-supervised training protocol and GAN-based losses. To understand the training instability, we inspect the gradient flow of attention and observe gradient conflicts among attention branches. This phenomenon motivates us to alleviate the gradient issue by preserving the dominant gradient branch while removing the conflicting ones. We propose a novel attention mechanism using this training strategy, Stop-Gradient Attention (SGA), which outperforms the attention baseline by a large margin with better training stability. Compared with state-of-the-art modules in line-art colorization, our approach demonstrates significant improvements in Fréchet Inception Distance (FID, up to 27.21%) and Structural Similarity Index Measure (SSIM, up to 25.67%) on several benchmarks. The code of SGA is available at https://github.com/kunkun0w0/sga.
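The gradient-conflict fix can be pictured on standard scaled dot-product attention. The NumPy sketch below (forward pass only, not the authors' implementation) marks where an SGA-style module would cut the conflicting branch: the attention map would be treated as a constant (stop-gradient) in the backward pass, keeping only the dominant gradient path through the values.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(sketch_q, ref_k, ref_v):
    """Scaled dot-product attention from sketch queries to reference
    keys/values. In an SGA-style variant (a sketch of the idea, not
    the paper's exact module), the attention map A would be detached
    on the conflicting branch during backprop."""
    d = sketch_q.shape[-1]
    A = softmax(sketch_q @ ref_k.T / np.sqrt(d))  # <- stop-gradient applied here
    return A @ ref_v, A

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))   # 4 sketch tokens, 8 channels
k = rng.standard_normal((6, 8))   # 6 reference tokens
v = rng.standard_normal((6, 8))
out, A = attention(q, k, v)
print(out.shape, np.allclose(A.sum(-1), 1.0))
```

In an autograd framework the stop-gradient would be a single `detach()` (PyTorch) or `stop_gradient` (JAX/TF) call on `A`; the forward output is unchanged, only the backward flow differs.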
In this paper, we focus on exploring effective methods for faster, more accurate, and domain-agnostic semantic segmentation. Inspired by optical flow for motion alignment between adjacent video frames, we propose a Flow Alignment Module (FAM) to learn the semantic flow between feature maps of adjacent levels and broadcast high-level features to high-resolution features effectively and efficiently. Furthermore, integrating our FAM with a common feature pyramid structure exhibits performance superior to other real-time methods even on lightweight backbone networks such as ResNet-18 and DFNet. Then, to further speed up inference, we also propose a novel Gated Dual Flow Alignment Module to directly align high-resolution and low-resolution feature maps; we term this improved network SFNet-Lite. Extensive experiments are conducted on several challenging datasets, and the results show the effectiveness of both SFNet and SFNet-Lite. In particular, the proposed SFNet-Lite series achieves 80.1 mIoU while running at 60 FPS with a ResNet-18 backbone, and 78.8 mIoU while running at 120 FPS with an STDC backbone on an RTX-3090. Moreover, we unify four challenging driving datasets (i.e., Cityscapes, Mapillary, IDD, and BDD) into one large dataset, which we name the Unified Driving Segmentation (UDS) dataset. It contains diverse domain and style information. We benchmark several representative works on UDS. SFNet and SFNet-Lite still achieve the best speed and accuracy trade-offs on UDS, serving as strong baselines in this new challenging setting. All code and models are publicly available at https://github.com/lxtgh/sfsegnets.
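The core operation inside a FAM-style module is warping a feature map by a learned per-pixel flow field. The NumPy sketch below implements that bilinear warping step (the learned flow-prediction convolutions are omitted; this is a simplified illustration, not the released code).

```python
import numpy as np

def flow_warp(feat, flow):
    """Warp a feature map (C, H, W) by a per-pixel flow field (2, H, W)
    using bilinear sampling. flow[0] is the x-displacement, flow[1]
    the y-displacement, both in pixels."""
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    x = np.clip(xs + flow[0], 0, W - 1)
    y = np.clip(ys + flow[1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    # Bilinear blend of the four neighboring feature vectors.
    return (feat[:, y0, x0] * (1 - wy) * (1 - wx) +
            feat[:, y0, x1] * (1 - wy) * wx +
            feat[:, y1, x0] * wy * (1 - wx) +
            feat[:, y1, x1] * wy * wx)

feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
zero_flow = np.zeros((2, 4, 4))
print(np.allclose(flow_warp(feat, zero_flow), feat))  # identity warp -> True
```

In the full module, the flow field would be predicted by a small conv head from the concatenated high- and low-level features (after upsampling), so the warp aligns semantics across resolutions rather than across video frames.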
Motivated by biological evolution, this paper explains the rationality of the Vision Transformer by analogy with the proven, practical evolutionary algorithm (EA), and derives that both have consistent mathematical formulations. Then, inspired by effective EA variants, we propose a novel pyramid EATFormer backbone that contains only the proposed EA-based Transformer (EAT) block, which consists of three residual parts, i.e., Multi-Scale Region Aggregation (MSRA), Global and Local Interaction (GLI), and Feed-Forward Network (FFN) modules, to model multi-scale, interactive, and individual information, respectively. Moreover, we design a Task-Related Head (TRH) docked with the transformer backbone to complete the final information fusion more flexibly, and improve a Modulated Deformable MSA (MD-MSA) to model positions dynamically. Extensive quantitative and qualitative experiments on image classification, downstream tasks, and explanatory experiments demonstrate the effectiveness and superiority of our approach over state-of-the-art (SOTA) methods. E.g., our Mobile (1.8M), Tiny (6.1M), Small (24.3M), and Base (49.0M) models achieve 69.4, 78.4, 83.1, and 83.9 Top-1 accuracy when trained on ImageNet-1K with a naive training recipe only; Mask R-CNN armed with EATFormer-Tiny/Small/Base obtains 45.4/47.4/49.0 box AP and 41.4/42.9/44.2 mask AP on COCO detection, surpassing contemporary MPViT-T, Swin-T, and Swin-S by 0.6/1.4/0.5 box AP and 0.4/1.3/0.9 mask AP, respectively, with fewer FLOPs; our EATFormer-Small/Base achieves 47.3/49.3 mIoU with UperNet, exceeding Swin-T/S by 2.8/1.7. Code will be available at https://github.com/zhangzjn/eatformer.
Previous multi-task dense prediction studies developed complex pipelines, such as multi-modal distillation in multiple stages or searching for task-relational context for each task. The core insight behind these methods is to maximize the mutual effects between tasks. Inspired by recent query-based Transformers, we propose a simpler pipeline named Multi-Query Transformer (MQTransformer), which is equipped with multiple queries from different tasks to facilitate reasoning among the tasks and simplify the cross-task pipeline. Instead of modeling the dense per-pixel context among different tasks, we seek a task-specific proxy to perform cross-task reasoning via multiple queries, where each query encodes task-related context. MQTransformer consists of three key components: a shared encoder, cross-task attention, and a shared decoder. We first model each task with a task-relevant and scale-aware query; then both the image features output by the feature extractor and the task-relevant query features are fed into the shared encoder, which encodes the query features from the image features. Second, we design a cross-task attention module to reason about the dependencies among multiple tasks and feature scales from two perspectives: different tasks at the same scale and different scales of the same task. Then we use a shared decoder to gradually refine the image features with the reasoned query features from different tasks. Extensive experimental results on two dense prediction datasets (NYUD-v2 and PASCAL-Context) show that the proposed method is effective and achieves state-of-the-art results.
Panoptic Part Segmentation (PPS) aims to unify panoptic segmentation and part segmentation into one task. Previous works mainly utilize separate approaches to handle thing, stuff, and part predictions individually, without performing any shared computation or task association. In this work, we aim to unify these tasks at the architectural level, designing the first end-to-end unified method named Panoptic-PartFormer. In particular, motivated by the recent progress in Vision Transformers, we model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We design a decoupled decoder to generate part features and thing/stuff features, respectively. Then, we propose to utilize all the queries and the corresponding features to perform reasoning jointly. The final mask can be obtained via the inner product between the queries and the corresponding features. Extensive ablation studies and analysis demonstrate the effectiveness of our framework. Our Panoptic-PartFormer achieves new state-of-the-art results on both the Cityscapes PPS and Pascal Context PPS datasets, with at least a 70% decrease in GFLOPs and 50% in parameters. In particular, we obtain 3.4% relative improvements with a ResNet50 backbone and 10% improvements after adopting a Swin Transformer on the Pascal Context PPS dataset. To the best of our knowledge, we are the first to solve the PPS problem via a unified and end-to-end transformer model. Given its effectiveness and conceptual simplicity, we hope our Panoptic-PartFormer can serve as a good baseline and aid future unified research on PPS. Our code and models are available at https://github.com/lxtgh/panoptic-partformer.
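The query-based mask decoding described above reduces to a simple operation: one inner product between each query embedding and every pixel feature. The NumPy sketch below illustrates that generic mechanism with made-up shapes (it is not the released code; the query/feature names are illustrative).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative shapes: N queries (things + stuff + parts), C channels,
# and an H x W decoder feature map.
N, C, H, W = 5, 16, 8, 8
rng = np.random.default_rng(0)
queries = rng.standard_normal((N, C))      # one embedding per thing/stuff/part query
features = rng.standard_normal((C, H, W))  # decoder feature map

# Inner product between each query and every pixel feature vector
# yields N mask logit maps; sigmoid turns them into soft masks.
mask_logits = np.einsum("nc,chw->nhw", queries, features)
masks = sigmoid(mask_logits)               # per-query soft masks in (0, 1)
print(masks.shape)  # (5, 8, 8)
```

A parallel linear classifier on the same query embeddings would supply the class label per mask, which is what makes the three prediction types a single mask-prediction-and-classification problem.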
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance on par with previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the VOD pipeline, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain the current-frame detection results. These designs boost the strong Deformable DETR baseline by a significant margin (2%-4% mAP) on the ImageNet VID dataset. We then present two improved versions of TransVOD: TransVOD++ and TransVOD Lite. The former fuses object-level information into the object queries via dynamic convolution, while the latter models entire video clips as the output to speed up inference. We give a detailed analysis of all three models in the experiment section. In particular, our proposed TransVOD++ sets a new state-of-the-art record in accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed and accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU. Code and models will be available for further research.
The recently proposed Depth-aware Video Panoptic Segmentation (DVPS) task aims to predict panoptic segmentation results and depth maps in a video, a challenging scene understanding problem. In this paper, we present PolyphonicFormer, which unifies all the sub-tasks under the DVPS task. Our method explores the relationship between depth estimation and panoptic segmentation via query-based learning. In particular, we design three different kinds of queries, including thing queries, stuff queries, and depth queries. We then propose to learn the correlations among these queries via gated fusion. From the experiments, we demonstrate the benefits of our design from both the depth estimation and panoptic segmentation aspects. Since each thing query also encodes instance-wise information, it is natural to perform tracking by cropping instance mask features with appearance learning. Our method ranks 1st on the ICCV-2021 BMTT Challenge video + depth track. Ablation studies are reported to show how we improve performance. Code will be available at https://github.com/harboryuan/polyphonicformer.
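The gated fusion between paired queries can be sketched at the level of a single query pair. In the NumPy snippet below, the gate parameterization (a sigmoid over the concatenated queries) is an assumption for illustration, not the paper's exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(q_seg, q_depth, W_g):
    """Fuse a segmentation query with its paired depth query through a
    learned per-channel gate -- a generic sketch of query-level gated
    fusion (the gate parameterization here is hypothetical)."""
    gate = sigmoid(np.concatenate([q_seg, q_depth]) @ W_g)
    return gate * q_seg + (1.0 - gate) * q_depth

rng = np.random.default_rng(0)
C = 16
q_seg, q_depth = rng.standard_normal(C), rng.standard_normal(C)
W_g = rng.standard_normal((2 * C, C)) * 0.1  # toy gate weights
fused = gated_fusion(q_seg, q_depth, W_g)
print(fused.shape)  # (16,)
```

Because the gate output lies in (0, 1), each fused channel is a convex combination of the two queries, letting the network learn per-channel how much segmentation context should inform depth and vice versa.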